37 research outputs found

    The Query Complexity of Mastermind with $\ell_p$ Distances

    Consider a variant of the Mastermind game in which queries are $\ell_p$ distances, rather than the usual Hamming distance. That is, a codemaker chooses a hidden vector $y \in \{-k,-k+1,\dots,k-1,k\}^n$ and answers queries of the form $\|y-x\|_p$, where $x \in \{-k,-k+1,\dots,k-1,k\}^n$. The goal is to minimize the number of queries made in order to correctly guess $y$. In this work, we show an upper bound of $O(\min\{n, (n \log k)/\log n\})$ queries for any real $1 \leq p < \infty$, together with matching lower bounds that hold even when the goal is only to approximate the hidden vector. Thus, essentially any approximation of this problem is as hard as finding the hidden vector exactly, up to constant factors. Finally, we show that for the noisy version of the problem, i.e., the setting in which the codemaker answers queries with any $q = (1 \pm \epsilon)\|y-x\|_p$, there is no query-efficient algorithm.
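    The query model above is easy to simulate. The sketch below is illustrative only (not the paper's $O(\min\{n,(n\log k)/\log n\})$ algorithm): it recovers the hidden vector with a naive $2n+1$ adaptive queries by probing each coordinate at $\pm k$, using the fact that $(k+t)^p - (k-t)^p$ is strictly increasing in $t$ on $[-k,k]$. The helper names (`lp_distance`, `naive_recover`) are ours, not from the paper.

import random

def lp_distance(u, v, p):
    """l_p distance between two integer vectors."""
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)

def naive_recover(query, n, k, p):
    """Recover the hidden y in {-k,...,k}^n from an l_p distance oracle using
    2n + 1 queries: one at the origin and two per coordinate. A naive baseline
    for illustration; the paper's algorithm uses far fewer queries."""
    base = query([0] * n) ** p                      # sum_j |y_j|^p
    y_hat = []
    for i in range(n):
        plus = [k if j == i else 0 for j in range(n)]
        minus = [-k if j == i else 0 for j in range(n)]
        a = query(plus) ** p - base                 # (k - y_i)^p - |y_i|^p
        b = query(minus) ** p - base                # (k + y_i)^p - |y_i|^p
        # (k + t)^p - (k - t)^p is strictly increasing on [-k, k], so the
        # value b - a pins down y_i among the 2k + 1 integer candidates.
        y_hat.append(min(range(-k, k + 1),
                         key=lambda t: abs((k + t) ** p - (k - t) ** p - (b - a))))
    return y_hat

if __name__ == "__main__":
    n, k, p = 6, 5, 3
    y = [random.randint(-k, k) for _ in range(n)]
    assert naive_recover(lambda x: lp_distance(y, x, p), n, k, p) == y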

    Performance of $\ell_1$ Regularization for Sparse Convex Optimization

    Despite widespread adoption in practice, guarantees for the LASSO and Group LASSO are strikingly lacking in settings beyond statistical problems, and these algorithms are usually considered to be heuristics in the context of sparse convex optimization on deterministic inputs. We give the first recovery guarantees for the Group LASSO for sparse convex optimization with vector-valued features. We show that if a sufficiently large Group LASSO regularization is applied when minimizing a strictly convex function $l$, then the minimizer is a sparse vector supported on vector-valued features with the largest $\ell_2$ norm of the gradient. Thus, repeating this procedure selects the same set of features as the Orthogonal Matching Pursuit algorithm, which admits recovery guarantees for any function $l$ with restricted strong convexity and smoothness via weak submodularity arguments. This answers open questions of Tibshirani et al. and Yasuda et al. Our result is the first to theoretically explain the empirical success of the Group LASSO for convex functions under general input instances assuming only restricted strong convexity and smoothness. Our result also generalizes provable guarantees for the Sequential Attention algorithm, which is a feature selection algorithm inspired by the attention mechanism proposed by Yasuda et al. As an application of our result, we give new results for the column subset selection problem, which is well studied when the loss is the Frobenius norm or other entrywise matrix losses. We give the first result for general loss functions for this problem that requires only restricted strong convexity and smoothness.
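    A minimal sketch of the selection rule described above, under our own naming: in the large-regularization limit, the Group LASSO keeps the feature groups whose gradient blocks have the largest $\ell_2$ norm, which is exactly the greedy rule of (group) Orthogonal Matching Pursuit. The helper below implements that greedy loop for a generic smooth convex loss; it illustrates the equivalence and is not the paper's proof or a LASSO solver.

import numpy as np

def greedy_group_select(grad_f, refit, groups, n_select, dim):
    """OMP-style selection: repeatedly pick the unselected group whose gradient
    block has the largest l_2 norm at the current iterate, then re-minimize the
    loss restricted to the selected groups."""
    selected, x = [], np.zeros(dim)
    for _ in range(n_select):
        g = grad_f(x)
        scores = {name: np.linalg.norm(g[idx])
                  for name, idx in groups.items() if name not in selected}
        selected.append(max(scores, key=scores.get))
        x = refit(selected)
    return selected, x

if __name__ == "__main__":
    # Toy instance: f(x) = 0.5 * ||Ax - b||^2 with three 2-dimensional groups.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 6))
    b = A @ np.array([3.0, -2.0, 0.0, 0.0, 1.5, 0.0])
    groups = {"g0": [0, 1], "g1": [2, 3], "g2": [4, 5]}
    grad_f = lambda x: A.T @ (A @ x - b)

    def refit(selected):
        idx = sorted(i for name in selected for i in groups[name])
        x = np.zeros(6)
        sol, *_ = np.linalg.lstsq(A[:, idx], b, rcond=None)
        x[idx] = sol
        return x

    print(greedy_group_select(grad_f, refit, groups, n_select=2, dim=6))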

    Sharper Bounds for $\ell_p$ Sensitivity Sampling

    In large-scale machine learning, random sampling is a popular way to approximate datasets by a small representative subset of examples. In particular, sensitivity sampling is an intensely studied technique which provides provable guarantees on the quality of approximation, while reducing the number of examples to the product of the VC dimension $d$ and the total sensitivity $\mathfrak S$, in remarkably general settings. However, guarantees going beyond this general bound of $\mathfrak S d$ are known in perhaps only one setting, for $\ell_2$ subspace embeddings, despite intense study of sensitivity sampling in prior work. In this work, we show the first bounds for sensitivity sampling for $\ell_p$ subspace embeddings for $p \neq 2$ that improve over the general $\mathfrak S d$ bound, achieving a bound of roughly $\mathfrak S^{2/p}$ for $1 \leq p < 2$ and $\mathfrak S^{2-2/p}$ for $2 < p < \infty$. For $1 \leq p < 2$, we show that this bound is tight, in the sense that there exist matrices for which $\mathfrak S^{2/p}$ samples are necessary. Furthermore, our techniques yield further new results in the study of sampling algorithms, showing that the root leverage score sampling algorithm achieves a bound of roughly $d$ for $1 \leq p < 2$, and that a combination of leverage score and sensitivity sampling achieves an improved bound of roughly $d^{2/p}\mathfrak S^{2-4/p}$ for $2 < p < \infty$. Our sensitivity sampling results yield the best known sample complexity for a wide class of structured matrices that have small $\ell_p$ sensitivity. Comment: To appear in ICML 2023.
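    For context, the generic sensitivity sampling scheme the bounds refer to looks as follows: keep row $i$ with probability $p_i = \min(1, \alpha s_i)$, where $s_i$ is (an overestimate of) its $\ell_p$ sensitivity, and rescale kept rows by $(1/p_i)^{1/p}$ so that $\|SAx\|_p^p$ is an unbiased estimate of $\|Ax\|_p^p$. The sketch below demonstrates this for $p = 2$, where the sensitivities are exactly the leverage scores; the function names and the oversampling parameter $\alpha$ are ours, and the paper's contribution is the analysis of how small the resulting sample size can be.

import numpy as np

def l2_sensitivities(A):
    """For p = 2 the l_p sensitivities are the leverage scores:
    s_i = ||q_i||_2^2 for a thin QR factorization A = QR."""
    Q, _ = np.linalg.qr(A)
    return np.sum(Q * Q, axis=1)

def sensitivity_sample(A, s, p, alpha, rng):
    """Keep row i with probability min(1, alpha * s_i) and rescale it by
    (1 / p_i)^(1/p), so that E[||SAx||_p^p] = ||Ax||_p^p for every x."""
    probs = np.minimum(1.0, alpha * s)
    keep = rng.random(len(probs)) < probs
    return A[keep] * (1.0 / probs[keep])[:, None] ** (1.0 / p)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((5000, 10))
    SA = sensitivity_sample(A, l2_sensitivities(A), p=2, alpha=50.0, rng=rng)
    x = rng.standard_normal(10)
    print(SA.shape[0], np.linalg.norm(SA @ x) / np.linalg.norm(A @ x))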

    High-Dimensional Geometric Streaming in Polynomial Space

    Many existing algorithms for streaming geometric data analysis have been plagued by exponential dependencies in the space complexity, which are undesirable for processing high-dimensional data sets. In particular, once $d \geq \log n$, there are no known non-trivial streaming algorithms for problems such as maintaining convex hulls and Löwner-John ellipsoids of $n$ points, despite a long line of work in streaming computational geometry since [AHV04]. We simultaneously improve these results to $\mathrm{poly}(d, \log n)$ bits of space by trading off with a $\mathrm{poly}(d, \log n)$ factor distortion. We achieve these results in a unified manner, by designing the first streaming algorithm for maintaining a coreset for $\ell_\infty$ subspace embeddings with $\mathrm{poly}(d, \log n)$ space and $\mathrm{poly}(d, \log n)$ distortion. Our algorithm also gives similar guarantees in the online coreset model. Along the way, we sharpen results for online numerical linear algebra by replacing a log condition number dependence with a $\log n$ dependence, answering a question of [BDM+20]. Our techniques provide a novel connection between leverage scores, a fundamental object in numerical linear algebra, and computational geometry. For $\ell_p$ subspace embeddings, we give nearly optimal trade-offs between space and distortion for one-pass streaming algorithms. For instance, we give a deterministic coreset using $O(d^2 \log n)$ space and $O((d \log n)^{1/2 - 1/p})$ distortion for $p > 2$, whereas previous deterministic algorithms incurred a $\mathrm{poly}(n)$ factor in the space or the distortion [CDW18]. Our techniques have implications in the offline setting, where we give optimal trade-offs between the space complexity and distortion of subspace sketch data structures. To do this, we give an elementary proof of a "change of density" theorem of [LT80] and make it algorithmic. Comment: Abstract shortened to meet arXiv limits; v2 fixes statements concerning the online condition number.
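    One ingredient mentioned above, the connection between leverage scores and online coresets, can be illustrated with a simple online filter. This is our own illustrative sketch, not the paper's $\ell_\infty$ coreset construction: when a row arrives, compute a leverage-type score against the rows kept so far plus a small ridge term, and keep the row only if the score is large. The class name and the `lam`/`threshold` parameters are placeholders.

import numpy as np

class OnlineLeverageFilter:
    """Keep a streamed row a if a^T (M + lam*I)^{-1} a >= threshold, where M is
    the sum of outer products of previously kept rows. The inverse is maintained
    with rank-one (Sherman-Morrison) updates, so each row is processed in O(d^2)
    time, and only rows with a large score are stored."""

    def __init__(self, d, lam=1e-3, threshold=0.5):
        self.M_inv = np.eye(d) / lam      # (M + lam*I)^{-1} with M = 0
        self.threshold = threshold
        self.rows = []

    def process(self, a):
        score = float(a @ self.M_inv @ a)
        if score >= self.threshold:
            self.rows.append(a.copy())
            Ma = self.M_inv @ a
            # Sherman-Morrison update for (M + a a^T + lam*I)^{-1}.
            self.M_inv -= np.outer(Ma, Ma) / (1.0 + a @ Ma)
        return score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = OnlineLeverageFilter(d=20)
    for _ in range(5000):
        f.process(rng.standard_normal(20))
    print(len(f.rows), "of 5000 rows kept")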

    New Subset Selection Algorithms for Low Rank Approximation: Offline and Online

    Subset selection for the rank-$k$ approximation of an $n \times d$ matrix $A$ offers improvements in the interpretability of matrices, as well as a variety of computational savings. This problem is well understood when the error measure is the Frobenius norm, with various tight algorithms known even in challenging models such as the online model, where an algorithm must select the column subset irrevocably when the columns arrive one by one. In contrast, for other matrix losses, optimal trade-offs between the subset size and approximation quality have not been settled, even in the offline setting. We give a number of results towards closing these gaps. In the offline setting, we achieve nearly optimal bicriteria algorithms in two settings. First, we remove a $\sqrt k$ factor from a result of [SWZ19] when the loss function is any entrywise loss with an approximate triangle inequality and at least linear growth. Our result is tight for the $\ell_1$ loss. We give a similar improvement for entrywise $\ell_p$ losses for $p > 2$, improving a previous distortion of $k^{1-1/p}$ to $k^{1/2-1/p}$. Our results come from a technique which replaces the use of a well-conditioned basis with a slightly larger spanning set, for which any vector can be expressed as a linear combination with small Euclidean norm. We show that this technique also gives the first oblivious $\ell_p$ subspace embeddings for $1 < p < 2$ with $\tilde O(d^{1/p})$ distortion, which is nearly optimal and closes a long line of work. In the online setting, we give the first online subset selection algorithm for $\ell_p$ subspace approximation and entrywise $\ell_p$ low rank approximation by implementing sensitivity sampling online, which is challenging due to the sequential nature of sensitivity sampling. Our main technique is an online algorithm for detecting when an approximately optimal subspace changes substantially. Comment: To appear in STOC 2023; abstract shortened.
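    To make the problem concrete, the sketch below sets up bicriteria column subset selection in its simplest form (our own baseline, not any of the paper's algorithms): greedily pick columns that explain the most remaining Frobenius mass, then measure the entrywise $\ell_p$ error of the best least-squares fit from the chosen columns. The paper's results concern how few columns suffice for general entrywise losses, and how to perform the selection online.

import numpy as np

def greedy_column_subset(A, k):
    """Pick k columns greedily: at each step take the column with the largest
    residual norm and project it out of the residual."""
    R, S = A.astype(float).copy(), []
    for _ in range(k):
        norms = np.linalg.norm(R, axis=0)
        j = int(np.argmax(norms))
        S.append(j)
        q = R[:, j] / (norms[j] + 1e-12)
        R -= np.outer(q, q @ R)        # remove the chosen direction from R
    return S

def entrywise_error(A, S, p):
    """Entrywise l_p error of the least-squares fit A ~ A[:, S] X.
    (For p != 2 the least-squares X is only a convenient proxy here.)"""
    X, *_ = np.linalg.lstsq(A[:, S], A, rcond=None)
    return np.sum(np.abs(A - A[:, S] @ X) ** p) ** (1.0 / p)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 60))  # rank 8
    S = greedy_column_subset(A, k=10)
    print(S, entrywise_error(A, S, p=1))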

    Sketching Algorithms for Sparse Dictionary Learning: PTAS and Turnstile Streaming

    Sketching algorithms have recently proven to be a powerful approach both for designing low-space streaming algorithms as well as fast polynomial time approximation schemes (PTAS). In this work, we develop new techniques to extend the applicability of sketching-based approaches to the sparse dictionary learning and the Euclidean $k$-means clustering problems. In particular, we initiate the study of the challenging setting where the dictionary/clustering assignment for each of the $n$ input points must be output, which has surprisingly received little attention in prior work. On the fast algorithms front, we obtain a new approach for designing PTAS's for the $k$-means clustering problem, which generalizes to the first PTAS for the sparse dictionary learning problem. On the streaming algorithms front, we obtain new upper bounds and lower bounds for dictionary learning and $k$-means clustering. In particular, given a design matrix $\mathbf A \in \mathbb R^{n \times d}$ in a turnstile stream, we show an $\tilde O(nr/\epsilon^2 + dk/\epsilon)$ space upper bound for $r$-sparse dictionary learning of size $k$, an $\tilde O(n/\epsilon^2 + dk/\epsilon)$ space upper bound for $k$-means clustering, as well as an $\tilde O(n)$ space upper bound for $k$-means clustering on random-order row-insertion streams with a natural "bounded sensitivity" assumption. On the lower bounds side, we obtain a general $\tilde\Omega(n/\epsilon + dk/\epsilon)$ lower bound for $k$-means clustering, as well as an $\tilde\Omega(n/\epsilon^2)$ lower bound for algorithms which can estimate the cost of a single fixed set of candidate centers. Comment: To appear in NeurIPS 2023.
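    The following sketch-and-solve toy (our own illustration, not the paper's turnstile-streaming algorithm) shows the flavor of the setting in which per-point assignments must be output: compress the $d$-dimensional points with a CountSketch-style random map, run Lloyd's iterations in the compressed space, and report an assignment for every point. The bucket count, iteration count, and function names are illustrative choices.

import numpy as np

def countsketch(A, m, seed=0):
    """Apply a CountSketch map to the columns of A: each of the d coordinates
    is hashed to one of m buckets with a random sign."""
    rng = np.random.default_rng(seed)
    d = A.shape[1]
    bucket = rng.integers(0, m, size=d)
    sign = rng.choice([-1.0, 1.0], size=d)
    SA = np.zeros((A.shape[0], m))
    for j in range(d):
        SA[:, bucket[j]] += sign[j] * A[:, j]
    return SA

def lloyd(X, k, iters=25, seed=0):
    """Plain Lloyd's iterations; returns the per-point cluster assignment."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[assign == c].mean(axis=0) if np.any(assign == c)
                      else C[c] for c in range(k)])
    return assign

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = np.vstack([rng.normal(loc=6.0 * c, size=(200, 1000)) for c in range(3)])
    assign = lloyd(countsketch(A, m=50), k=3)   # cluster in the sketched space
    print(np.bincount(assign))                  # cluster sizes over all n points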